Session D-7

Network Functions and Tasking

May 5 Thu, 10:00 AM — 11:30 AM EDT

An Efficient Two-Layer Task Offloading Scheme for MEC Networks with Multiple Services Providers

Ju Ren and Jiani Liu (Tsinghua University, China); Yongmin Zhang and Zhaohui Li (Central South University, China); Zhibo Wang (Zhejiang University, China); Feng Lyu (Central South University, China); Yaoxue Zhang (Tsinghua University, China)

With the explosive growth of mobile and IoT applications, a growing number of Mobile Edge Computing (MEC) systems have been deployed by diverse edge service providers (ESPs), opening a new computing market with stiff competition. Because computation tasks vary spatiotemporally, an ESP that handles all received tasks on its own may greatly degrade the service performance of its MEC system and earn poor economic returns. To this end, this paper proposes a two-layer collaboration model for ESPs, in which each ESP can not only balance the computation workload among its own internal edge nodes but also offload part of its computation tasks to external edge servers operated by other ESPs. For internal load balancing, we propose an ADMM-based task balancing scheme that manages the computation tasks across the ESP's edge nodes so as to minimize computation delay. For external task offloading, we formulate a game-based pricing and task allocation scheme that derives the best strategy for maximizing each ESP's total revenue. Extensive simulation results demonstrate that the proposed schemes achieve improved performance in terms of system revenue and stability, as well as computation delay.
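The paper's actual ADMM formulation is not given in the abstract; as a hypothetical stand-in, the sketch below applies the standard ADMM sharing-problem iteration to a toy version of internal load balancing: splitting a fixed workload `W` across edge nodes whose delay costs are quadratic, `a[i] * x[i]**2` (all names and cost functions here are illustrative assumptions).

```python
def admm_balance(a, W, rho=1.0, iters=200):
    """ADMM (sharing form) for: minimize sum_i a[i]*x[i]**2  s.t.  sum_i x[i] == W.

    Each node i keeps a local workload x[i]; the coupling constraint is enforced
    through the average zbar = W/n and a scaled dual variable u.
    """
    n = len(a)
    x = [W / n] * n          # start from an even split
    u = 0.0                  # scaled dual variable
    zbar = W / n             # average imposed by the sum constraint
    for _ in range(iters):
        xbar = sum(x) / n
        # Local x-update: argmin a[i]*x^2 + (rho/2)*(x - v_i)^2,
        # with v_i = x[i] - xbar + zbar - u  =>  x = rho*v_i / (2*a[i] + rho)
        x = [rho * (x[i] - xbar + zbar - u) / (2 * a[i] + rho) for i in range(n)]
        xbar = sum(x) / n
        u += xbar - zbar     # dual ascent on the averaged residual
    return x

# Two nodes with costs x^2 and 2x^2 sharing W = 3: optimum is x = [2, 1].
loads = admm_balance([1.0, 2.0], 3.0)
```

Each x-update uses only node-local data plus the shared averages, which is what makes ADMM attractive for balancing across an ESP's edge nodes.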

Dyssect: Dynamic Scaling of Stateful Network Functions

Fabricio Carvalho (Federal University of Mato Grosso do Sul, Brazil); Ronaldo A. Ferreira (UFMS, Brazil); Italo Cunha (Universidade Federal de Minas Gerais, Brazil); Marcos A. M. Vieira (Federal University of Minas Gerais, Brazil); Murali K Ramanathan (Uber Technologies Inc, USA)

Network Function Virtualization promises better utilization of computational resources by dynamically scaling resources on demand. However, most network functions (NFs) are stateful and require state updates on a per-packet basis. During a scaling operation, cores need to synchronize access to a shared state to avoid race conditions and to guarantee that NFs process packets in arrival order. Unfortunately, the classic approach to control concurrent access to a shared state with locks does not scale to today's throughput and latency requirements. Moreover, network traffic is highly skewed, leading to load imbalances in systems that use only sharding to partition the NF states. To address these challenges, we present Dyssect, a system that enables dynamic scaling of stateful NFs by disaggregating the states of network functions. By carefully coordinating actions between cores and a central controller, Dyssect migrates shards and flows between cores for load balancing or traffic prioritization without resorting to locks or reordering packets. Our experimental evaluation shows that Dyssect reduces tail latency up to 32% and increases throughput up to 19.36% when compared to state-of-the-art competing solutions.

Network Synthesis under Delay Constraints: The Power of Network Calculus Differentiability

Fabien Geyer (Airbus, Germany); Steffen Bondorf (Ruhr University Bochum, Germany)

With the advent of standards for deterministic network behavior, synthesizing network designs under delay constraints becomes the natural next task to tackle. Network Calculus (NC) has become a key method for validating industrial networks, as it computes formally verified end-to-end delay bounds. However, analyses from the NC framework were thus far designed to bound one flow's delay at a time. Attempts to use classical analyses to derive a network configuration have shown this approach to be ill-suited for practical use cases. Take finding a delay-optimal routing configuration: one model had to be created for each routing alternative, then each flow delay had to be bounded, and finally the bounds had to be compared against the given constraints. To overcome this three-step procedure, we introduce Differential Network Calculus. We extend NC to allow differentiation of delay bounds w.r.t. a wide range of network parameters, such as flow routes. This opens up NC to a class of efficient non-linear optimization techniques that take advantage of the gradient of the delay bound computation. Our numerical evaluation on the routing problem shows that our novel method can synthesize flow paths in a matter of seconds, outperforming existing methods by multiple orders of magnitude.

User Experience Oriented Task Computation for UAV-Assisted MEC System

Lutian Shen (Yunnan University, China)

Unmanned aerial vehicle (UAV)-assisted computation paradigms have been treated as the new norm in integrated space-air-ground networks in the B5G and 6G era. However, providing services solely from the perspective of the UAVs cannot ensure a high Quality of Experience (QoE) at the user devices (UDs). Therefore, this paper presents a user experience-oriented service provision model for UAV-aided mobile edge computing (MEC) systems. First, a novel metric called the shrinkage ratio is defined to reflect the experience at the UDs, and improving QoE is then formulated as a shrinkage ratio minimization problem. Second, a three-step strategy is adopted to tackle this non-convex problem: 1) a posterior method is adopted to eliminate the unknown upper bounds; 2) the non-convex constraints are approximately converted to convex ones by applying a first-order Taylor expansion and are handled by the SCA technique; 3) an algorithm is put forward to derive an accurate estimate for step 1). Third, an initialization scheme for the UAV trajectory is proposed by applying a TSP technique. Finally, numerical results are given to validate our proposed design.
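The SCA step in 2) can be illustrated on a one-variable toy problem unrelated to the paper's actual model: minimize `x` subject to the non-convex constraint `x**2 >= 1`. Linearizing `x**2` at the current iterate via a first-order Taylor expansion gives a convex (linear) surrogate constraint whose minimizer has a closed form, and iterating converges to the boundary point `x = 1`.

```python
def sca_minimize(x0, iters=20):
    """Successive Convex Approximation on:  minimize x  s.t.  x**2 >= 1.

    At iterate x_k, replace x**2 by its first-order Taylor expansion
    x_k**2 + 2*x_k*(x - x_k) >= 1, a linear (hence convex) constraint.
    The surrogate problem's minimizer is x = (1 + x_k**2) / (2*x_k),
    which becomes the next iterate.
    """
    x = x0
    for _ in range(iters):
        x = (1 + x * x) / (2 * x)
    return x

# Starting from a feasible point x0 = 3, the iterates shrink toward x = 1.
x_star = sca_minimize(3.0)
```

Each surrogate is a conservative convex restriction of the true feasible set, which is why SCA iterates stay feasible while monotonically improving the objective.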

Session Chair

Ana C Aguiar (University of Porto)

Session D-8

Online Learning

May 5 Thu, 12:00 PM — 1:30 PM EDT

Online Learning-Based Rate Selection for Wireless Interactive Panoramic Scene Delivery

Harsh Gupta (University of Illinois at Urbana-Champaign, USA); Jiangong Chen and Bin Li (The Pennsylvania State University, USA); R. Srikant (University of Illinois at Urbana-Champaign, USA)

Interactive panoramic scene delivery not only consumes 4~6× more bandwidth than traditional video streaming of the same resolution but also requires timely display of the delivered content to ensure smooth interaction. Since users can only see roughly 20% of the entire scene at a time (called the viewport), it is sufficient to deliver the relevant portion of the panoramic scene if we can accurately predict the user's motion. It is customary to deliver a portion larger than the viewport to tolerate inaccurate predictions. Intuitively, the larger the delivered portion, the higher the prediction accuracy but the lower the wireless transmission success probability. The goal is to select an appropriate delivery portion to maximize system throughput. We formulate this problem as a multi-armed bandit problem and use the classical Kullback-Leibler Upper Confidence Bound (KL-UCB) algorithm for the portion selection. We further develop a novel variant of the KL-UCB algorithm that effectively leverages two-level feedback (i.e., both prediction and transmission outcomes) after each decision on the selected portion and show its asymptotic optimality, which may be of independent interest in its own right. We demonstrate the superior performance of our proposed algorithms over existing heuristic methods using both synthetic simulations and real experimental evaluations.
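For reference, the classical KL-UCB index for a Bernoulli-reward arm (here, the success/failure of a chosen delivery portion) can be computed by bisection, since the Bernoulli KL divergence `kl(p, q)` is increasing in `q` for `q >= p`. This is a generic sketch of the baseline algorithm, not the authors' two-level-feedback variant.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q), with clamping."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, iters=40):
    """KL-UCB index: the largest q in [mean, 1] with pulls*kl(mean, q) <= log(t).

    Found by bisection, exploiting that kl(mean, .) is increasing on [mean, 1].
    """
    if pulls == 0:
        return 1.0                       # unexplored arms get the optimistic maximum
    bound = math.log(max(t, 2)) / pulls
    lo, hi = mean, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= bound:
            lo = mid                     # mid still satisfies the confidence bound
        else:
            hi = mid
    return lo

# At each round, the learner plays the portion (arm) with the largest index.
```

The index shrinks toward the empirical mean as an arm accumulates pulls, which is what drives exploration toward under-sampled portions.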

Schedule or Wait: Age-Minimization for IoT Big Data Processing in MEC via Online Learning

Zichuan Xu and Wenhao Ren (Dalian University of Technology, China); Weifa Liang (City University of Hong Kong, Hong Kong); Wenzheng Xu (Sichuan University, China); Qiufen Xia (Dalian University of Technology, China); Pan Zhou (School of CSE, Huazhong University of Science and Technology, China); Mingchu Li (School of Software, Dalian University of Technology, China)

The age of data (AoD) has been identified as one of the most novel and important metrics for measuring the quality of big data analytics for Internet-of-Things (IoT) applications. Meanwhile, mobile edge computing (MEC) is envisioned as an enabling technology for minimizing the AoD of IoT applications by processing the data in edge servers close to IoT devices. In this paper, we study the AoD minimization problem for IoT big data processing in MEC networks. We first propose an exact solution for the problem by formulating it as an Integer Linear Program (ILP). We then propose an efficient heuristic for the offline AoD minimization problem, and also devise an approximation algorithm with a provable approximation ratio for a special case of the problem by leveraging the parametric rounding technique. We further develop an online learning algorithm with a bounded regret for the online AoD minimization problem under dynamic arrivals of IoT requests and uncertain network delays, by adopting the Multi-Armed Bandit (MAB) technique. We finally evaluate the performance of the proposed algorithms through extensive simulations and implementations in a real test-bed. Results show that the proposed algorithms outperform existing approaches, reducing the AoD by around 10%.

Sending Timely Status Updates through Channel with Random Delay via Online Learning

Haoyue Tang, Yuchao Chen, Jintao Wang and Jingzhou Sun (Tsinghua University, China); Jian Song (Tsinghua University & Beijing National Research Center for Information Science and Technology & Key Lab of DTV System of Guangdong & Shenzhen, Research Institute of Tsinghua University in Shenzhen, China)

In this work, we study a status update system in which a source node sends timely information to a destination through a channel with random delay. We measure the timeliness of the information stored at the receiver via the Age of Information (AoI), the time elapsed since the freshest sample stored at the receiver was generated. The goal is to design sampling strategies that minimize the sum of the expected time-average AoI and the average sampling cost in the absence of transmission delay statistics. We reformulate this total cost minimization problem as the optimization of a renewal-reward process and propose an online sampling strategy based on the Robbins-Monro algorithm. We prove that, when the transmission delay is bounded, the expected time-average total cost converges to the minimum cost and the optimality gap decays at rate $\mathcal{O}(\log K/K)$, where $K$ is the number of samples taken so far. Simulation results validate the performance of our proposed algorithm.
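The Robbins-Monro scheme underlying such online sampling strategies can be sketched in its generic form: iterate `x_{k+1} = x_k - eta_k * h(x_k)` with decaying step sizes `eta_k = 1/k`, where `h` is a noisy observation of the function whose root is sought. The target function below is a made-up example, not the paper's AoI objective.

```python
import random

def robbins_monro(noisy_h, x0, steps=5000):
    """Robbins-Monro stochastic approximation: find the root of E[h(x)] = 0
    using only noisy evaluations of h, with step sizes eta_k = 1/k."""
    x = x0
    for k in range(1, steps + 1):
        x -= noisy_h(x) / k          # eta_k = 1/k satisfies the RM step conditions
    return x

# Toy target: E[h(x)] = x - 3, observed under zero-mean Gaussian noise,
# so the iterates should converge to the root x* = 3.
random.seed(0)
root = robbins_monro(lambda x: (x - 3.0) + random.gauss(0.0, 1.0), x0=0.0)
```

The step sizes satisfy the usual Robbins-Monro conditions (sum diverges, sum of squares converges), which is what lets the iterates average out the observation noise.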

Socially-Optimal Mechanism Design for Incentivized Online Learning

Zhiyuan Wang (Beihang University, China); Lin Gao (Harbin Institute of Technology (Shenzhen), China); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China)

The multi-armed bandit (MAB) is a classic online learning framework for sequential decision-making in an uncertain environment. The MAB framework, however, overlooks the scenario where the decision-maker cannot take actions (e.g., pull arms) directly. This scenario is practically important in many applications such as spectrum sharing, crowdsensing, and edge computing, where the decision-maker must incentivize other selfish agents to carry out the desired actions (i.e., to pull arms on the decision-maker's behalf). This paper establishes the incentivized online learning (IOL) framework for this scenario. The key challenge in designing the IOL framework lies in the tight coupling between learning the unknown environment and revealing asymmetric information. To address this, we construct a special Lagrangian function, based on which we propose a socially-optimal mechanism for the IOL framework. Our mechanism satisfies various desirable properties such as agent fairness, incentive compatibility, and voluntary participation. It achieves the same asymptotic performance as the state-of-the-art benchmark, which requires extra information. Our analysis also unveils the power of the crowd in the IOL framework: a larger agent crowd enables our mechanism to approach the theoretical upper bound of social performance more closely. Numerical results demonstrate the advantages of our mechanism in large-scale edge computing.

Session Chair

Bo Ji (Virginia Tech)

Session D-9

Reinforcement Learning

May 5 Thu, 2:30 PM — 4:00 PM EDT

Cost Effective MLaaS Federation: A Combinatorial Reinforcement Learning Approach

Shuzhao Xie and Yuan Xue (Tsinghua University, China); Yifei Zhu (Shanghai Jiao Tong University, China); Zhi Wang (Tsinghua University, China)

With the advancement of deep learning techniques, major cloud providers and niche machine learning service providers have started to offer cloud-based machine learning tools, also known as machine learning as a service (MLaaS), to the public. According to our measurements, for the same task, the MLaaS offerings of different providers vary in performance due to their proprietary datasets, models, etc. Federating different MLaaS providers allows us to improve the analytic performance further. However, naively aggregating results from different providers not only incurs significant monetary cost but may also yield sub-optimal performance gains due to the introduction of possible false positive results. In this paper, we propose Armol, a framework that federates the right selection of MLaaS providers to achieve the best possible analytic performance. We first design a word grouping algorithm to unify the output labels across different providers. We then present a deep combinatorial reinforcement learning based approach to maximize accuracy while minimizing cost. The predictions from the selected providers are then aggregated using carefully chosen ensemble strategies. A real-world trace-driven evaluation further demonstrates that Armol achieves the same accuracy with 67% less inference cost.

Landing Reinforcement Learning onto Smart Scanning of The Internet of Things

Jian Qu and Xiaobo Ma (Xi'an Jiaotong University, China); Wenmao Liu and Hongqing Sang (NSFOCUS Inc., China); Jianfeng Li (Xi'an Jiaotong University, China); Lei Xue and Xiapu Luo (The Hong Kong Polytechnic University, Hong Kong); Zhenhua Li (Tsinghua University, China); Li Feng (Center of Dependable and Secure Computing (CDSC), China); Xiaohong Guan (Xi'an Jiaotong University & Tsinghua University, China)

Cyber search engines, such as Shodan and Censys, have gained popularity due to their strong capability of indexing the Internet of Things (IoT). They actively scan and fingerprint IoT devices to unearth the IP-device mapping. Because of the Internet's large address space and the mapping's ever-changing nature, efficiently tracking the evolution of the IP-device mapping with a limited budget of scans is essential for building timely cyber search engines. An intuitive solution is to use reinforcement learning to schedule more scans to networks with high churn rates of the IP-device mapping, which can be learned from historical scanning records. However, such an intuitive solution has never been systematically studied. In this paper, we take the first step toward demystifying this problem based on our experience in maintaining a global IoT scanning platform. Inspired by a measurement study of large-scale real-world IoT scan records, we land reinforcement learning onto a system capable of smartly scanning IoT devices in a principled way. Through extensive experiments, we disclose the key parameters affecting the effectiveness of different scanning strategies, and find that our system achieves growing advantages with the proliferation of IoT devices.

Multi-Agent Distributed Reinforcement Learning for Making Decentralized Offloading Decisions

Jing Tan (Paderborn University, Germany); Ramin Khalili (Huawei Technologies, Germany); Holger Karl (Hasso Plattner Institute, Germany); Artur Hecker (Huawei, Germany)

We formulate computation offloading as a decentralized decision-making problem with selfish agents. We design an interaction mechanism that incentivizes agents to align selfish and system goals by balancing competition and cooperation. The mechanism provably has Nash equilibria with optimal resource allocation in the static case. For a dynamic environment, we propose a novel multi-agent online learning algorithm that learns with partial, delayed, and noisy state information, using a reward signal that greatly reduces the information needed. Empirical results confirm that, through learning, agents significantly improve both system and individual performance, e.g., a 40% reduction in offloading failure rate, a 32% reduction in communication overhead, up to 38% savings in computation resources under low contention, an 18% increase in utilization with reduced load variation under high contention, and improved fairness. The results also confirm the algorithm's good convergence and generalization properties in significantly different environments.

Reinforcement Learning for Dynamic Dimensioning of Cloud Caches: A Restless Bandit Approach

Guojun Xiong and Shufan Wang (Binghamton University, USA); Gang Yan (Binghamton University-SUNY, USA); Jian Li (Binghamton University, USA)

We study the dynamic cache dimensioning problem, where the objective is to decide how much storage to place in the cache to minimize the total cost with respect to storage and content delivery latency. We formulate this problem as a Markov decision process, which turns out to be a restless multi-armed bandit problem and is provably hard to solve. For given dimensioning decisions, it is possible to develop solutions based on the celebrated Whittle index policy. However, the Whittle index policy has not been studied for dynamic cache dimensioning, mainly because dimensioning needs to be repeatedly solved and jointly optimized with content caching. To overcome this difficulty, we propose a low-complexity fluid Whittle index policy that jointly determines dimensioning and content caching, and show that it is asymptotically optimal. We further develop a lightweight reinforcement-learning-augmented algorithm, dubbed fW-UCB, for the case where the content request and delivery rates are unavailable. fW-UCB achieves sublinear regret as it fully exploits the structure of the near-optimal fluid Whittle index policy and can hence be easily implemented. Extensive simulations using real traces support our theoretical results.
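fW-UCB's details are in the paper; as generic background on the UCB family it builds on, a plain UCB1 loop over a simulated two-option decision is sketched below. The reward probabilities and the two-arm setup are hypothetical, chosen only to show the index structure (empirical mean plus an exploration bonus).

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1: pull each arm once, then always play the arm maximizing
    empirical_mean + sqrt(2 * ln(t) / pulls)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                   # initialization round: try every arm once
        else:
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Simulated bandit with (made-up) success probabilities 0.2 and 0.8:
# UCB1 should concentrate its pulls on the better arm.
random.seed(1)
counts = ucb1(lambda a: float(random.random() < [0.2, 0.8][a]), n_arms=2, horizon=2000)
```

The exploration bonus shrinks as an arm is pulled more often, so under-explored arms are periodically revisited while the empirically best arm dominates in the long run.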

Session Chair

Xi Chen (Samsung Electronics)
